
    PickCells: A Physically Reconfigurable Cell-composed Touchscreen

    Touchscreens are the predominant medium for interaction with digital services; however, their fixed form factor narrows the scope for rich physical interaction by limiting interaction possibilities to a single, planar surface. In this paper we introduce PickCells, a fully reconfigurable device concept composed of cells that breaks the mould of rigid screens and explores a modular system affording rich sets of tangible interactions and novel across-device relationships. Through a series of co-design activities involving HCI experts and potential end-users of such systems, we synthesised a design space aimed at inspiring future research, giving researchers and designers a framework in which to explore modular screen interactions. The design space we propose unifies existing work on modular touch surfaces under a general framework and broadens horizons by opening up unexplored spaces that provide new interaction possibilities. In this paper, we present the PickCells concept, a design space of modular touch surfaces, and propose a toolkit for quick scenario prototyping.
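
    As a rough illustration of the cell-composition idea (a minimal sketch with hypothetical names, not the PickCells toolkit), such a modular device could be modelled as a set of detachable cells on a grid:

        # Hypothetical sketch of a cell-composed device; names are illustrative only.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Cell:
            cell_id: int
            x: int  # grid column the cell currently occupies
            y: int  # grid row the cell currently occupies

        class CellDevice:
            """A reconfigurable surface composed of detachable cells."""
            def __init__(self, cells):
                self.cells = {c.cell_id: c for c in cells}

            def detach(self, cell_id):
                """Remove a cell so it can act as a standalone tangible."""
                return self.cells.pop(cell_id)

            def neighbours(self, cell_id):
                """Cells physically adjacent to the given cell (4-connectivity)."""
                c = self.cells[cell_id]
                return [o for o in self.cells.values()
                        if abs(o.x - c.x) + abs(o.y - c.y) == 1]

        # Usage: a 2x2 device; detaching cell 3 leaves an L-shaped surface.
        device = CellDevice([Cell(0, 0, 0), Cell(1, 1, 0), Cell(2, 0, 1), Cell(3, 1, 1)])
        device.detach(3)
        print([c.cell_id for c in device.neighbours(0)])  # -> [1, 2]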

    Emergeables: Deformable Displays for Continuous Eyes-Free Mobile Interaction

    In this paper we present the concept of Emergeables: mobile surfaces that can deform or 'morph' to provide fully-actuated, tangible controls. Our goal in this work is to provide the flexibility of graphical touchscreens, coupled with the affordance and tactile benefits offered by physical widgets. In contrast to previous research in the area of deformable displays, our work focuses on continuous controls (e.g., dials or sliders), and strives for fully-dynamic positioning, providing versatile widgets that can change shape and location depending on the user's needs. We describe the design and implementation of two prototype emergeables built to demonstrate the concept, and present an in-depth evaluation that compares both with a touchscreen alternative. The results show the strong potential of emergeables for on-demand, eyes-free control of continuous parameters, particularly when comparing the accuracy and usability of a high-resolution emergeable to a standard GUI approach. We conclude with a discussion of the level of resolution necessary for future emergeables, and suggest how high-resolution versions might be achieved.
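
    A minimal sketch of how such on-demand continuous controls might be expressed in software (a hypothetical API, not the authors' prototype hardware):

        # Hypothetical API for emergeable continuous controls; not the authors' system.
        from dataclasses import dataclass

        @dataclass
        class Widget:
            kind: str           # "dial" or "slider"
            x: float            # position on the surface (mm)
            y: float
            value: float = 0.0  # normalised 0..1 control value

        class EmergeableSurface:
            def __init__(self):
                self.widgets = []

            def emerge(self, kind, x, y):
                """Raise a tangible control out of the flat surface."""
                w = Widget(kind, x, y)
                self.widgets.append(w)
                return w

            def retract(self, widget):
                """Flatten the control back into the surface."""
                self.widgets.remove(widget)

        # Usage: a dial emerges under the user's hand, is adjusted eyes-free,
        # then retracts when no longer needed.
        surface = EmergeableSurface()
        dial = surface.emerge("dial", x=40.0, y=25.0)
        dial.value = 0.7
        surface.retract(dial)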

    Deploying and evaluating a mixed reality mobile treasure hunt: Snap2Play

    Current trends suggest that future mobile phones will have ever-increasing computational power and will embed a range of sensors and actuators, including cameras, GPS, orientation sensors, tactile surfaces and vibro-tactile displays. Such powerful mobile platforms enable us to deploy mixed reality systems, and many studies on mobile mixed reality focus on games. In this paper, we describe the deployment and a user study of a mixed reality location-based mobile treasure hunt, Snap2Play [1], which uses technologies such as place recognition, accelerometers and GPS tracking to enhance interaction with the game and thereby its playability. The game we deployed and tested runs on an off-the-shelf camera phone.
    Yilun You, Tat Jun Chin, Joo Hwee Lim, Jean-Pierre Chevallet, Céline Coutrix & Laurence Nigay. http://mobilehci2008.telin.nl
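
    A minimal sketch of the kind of GPS trigger such a location-based game could use (hypothetical code, not the deployed Snap2Play implementation), based on the standard haversine distance:

        # Hypothetical location trigger; not the deployed Snap2Play code.
        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres between two WGS84 coordinates."""
            r = 6371000.0  # mean Earth radius in metres
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def clue_unlocked(player, target, radius_m=30.0):
            """True once the player is close enough to receive the next clue."""
            return haversine_m(*player, *target) <= radius_m

        # Usage: player approaching a clue placed near a landmark.
        print(clue_unlocked((1.2966, 103.7764), (1.2967, 103.7765)))  # -> True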

    High-Rise Buildings: Designing, Planning, Constructing; Conference Proceedings, RWTH Aachen, 30–31 March 1995; [Design, Management, Construction + Load-Bearing Structure, Infrastructure + Building Services, Facades]

    The last few years have seen an explosion of interaction possibilities opened up by ubiquitous computing, mobile devices, and tangible interaction. Our methods of modelling interaction, however, have not kept up. As is to be expected with such a rich situation, there are many ways in which interaction might be modelled, focussing, for example, on user tasks, physical location(s) and mobility, data flows or software elements. In this paper, we present a model and modelling technique intended to capture key aspects of users' interaction that are of interest to interactive system designers at the stage of requirements capture and early design. In particular, we characterise the interaction as a physically mediated information exchange, emphasising the physical entities involved and their relationships with the user and with one another. We apply the model to two examples in order to illustrate its expressive power.
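
    A minimal sketch of how such a model might be recorded (hypothetical notation, not the paper's formalism): entities plus directed information flows over physical media:

        # Hypothetical notation for the model; not the paper's formalism.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Entity:
            name: str
            kind: str  # "user", "device", "object", ...

        @dataclass(frozen=True)
        class Flow:
            source: Entity
            sink: Entity
            medium: str  # the physical channel carrying the information

        # Usage: a user reading from a phone screen and replying by touch.
        user = Entity("user", "user")
        phone = Entity("mobile phone", "device")
        model = [
            Flow(phone, user, medium="screen/light"),
            Flow(user, phone, medium="touch surface"),
        ]
        for f in model:
            print(f"{f.source.name} -> {f.sink.name} via {f.medium}")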

    SplitSlider: A Tangible Interface to Input Uncertainty

    Experiencing uncertainty is common when answering questionnaires. For example, users may not be sure how often they use trains. Enabling users to input their uncertainty is thus important for increasing the data's reliability and for making better decisions based on the data. However, few interfaces have been explored to support uncertain input, especially with tangible user interfaces (TUIs). TUIs are more discoverable than GUIs and better support simultaneous input of multiple parameters, which motivates us to explore different TUI designs for inputting users' best-estimate answer (value) and uncertainty. In this paper, we first generate five TUI designs that can input both value and uncertainty and build low-fidelity prototypes. We then conduct focus-group interviews to evaluate the prototypes and implement the best design, SplitSlider, as a working prototype. A lab study with SplitSlider shows that a third of the participants (4/12) were able to discover the uncertainty-input function without any explanation, and once it was explained, all of them could easily understand the concept and input uncertainty.
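
    A minimal sketch of the reading a SplitSlider-style control could produce (hypothetical code, not the authors' prototype): a best estimate plus an uncertainty interval derived from the split between the handle halves:

        # Hypothetical reading from a SplitSlider-style control; illustrative only.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class UncertainAnswer:
            value: float       # best-estimate position, normalised 0..1
            half_width: float  # half of the gap between the two split handles

            @property
            def interval(self):
                """Range the respondent considers plausible, clamped to the scale."""
                lo = max(0.0, self.value - self.half_width)
                hi = min(1.0, self.value + self.half_width)
                return lo, hi

        # Usage: "about 0.5, give or take 0.25" on a normalised questionnaire scale.
        answer = UncertainAnswer(value=0.5, half_width=0.25)
        print(answer.interval)  # -> (0.25, 0.75)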